BoundAD: Boundary-Aware Negative Generation for Time Series Anomaly Detection
Xiancheng Wang, Lin Wang, Zhibo Zhang, Rui Wang, Minghang Zhao
Contrastive learning methods for time series anomaly detection (TSAD) depend heavily on the quality of negative sample construction. However, existing strategies based on random perturbations or pseudo-anomaly injection often struggle to simultaneously preserve temporal semantic consistency and provide effective decision-boundary supervision. Most existing methods rely on prior anomaly injection and overlook the potential of generating hard negatives near the data-manifold boundary directly from normal samples themselves. To address this issue, we propose a reconstruction-driven boundary negative generation framework that automatically constructs hard negatives through the reconstruction process of normal samples. Specifically, the method first employs a reconstruction network to capture normal temporal patterns, and then introduces a reinforcement learning strategy that adaptively adjusts the optimization update magnitude according to the current reconstruction state. In this way, boundary-shifted samples close to the normal data manifold are induced along the reconstruction trajectory and then used for contrastive representation learning. Unlike existing methods that depend on explicit anomaly injection, the proposed framework requires no predefined anomaly patterns; instead, it mines more challenging boundary negatives from the model's own learning dynamics. Experimental results show that the proposed method improves anomaly representation learning and achieves competitive detection performance on the evaluated datasets.
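A minimal sketch of the boundary-negative idea described above. For illustration it substitutes a linear projection for the reconstruction network and an error-scaled step for the reinforcement-learning policy; both substitutions are assumptions, not the paper's actual components:

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruct(x, W):
    """Toy linear autoencoder: project a window onto a low-rank subspace
    (the 'normal manifold') and back.  Stand-in for the reconstruction
    network in the abstract."""
    return (x @ W) @ W.T

def boundary_negative(x, W, alpha=2.0):
    """Amplify the reconstruction residual to push a normal window just
    outside the normal manifold.  The adaptive step below is a crude
    stand-in for the RL-adjusted update magnitude (assumption): windows
    that are already well reconstructed receive a larger push."""
    x_hat = reconstruct(x, W)
    residual = x - x_hat
    err = np.linalg.norm(residual)
    step = alpha / (1.0 + err)          # adaptive update magnitude
    return x + step * residual          # boundary-shifted hard negative

# Toy data: a window of length 8 lying near a 2-D "normal" subspace.
W, _ = np.linalg.qr(rng.standard_normal((8, 2)))   # orthonormal basis
x = W @ rng.standard_normal(2) + 0.05 * rng.standard_normal(8)

neg = boundary_negative(x, W)
dist = lambda v: np.linalg.norm(v - reconstruct(v, W))  # distance to manifold
print(dist(neg) > dist(x))   # → True: the negative sits farther out than x
```

Because the residual is orthogonal to the projection subspace, scaling it moves the sample directly away from the manifold by a controlled amount, which is what makes the generated negative "hard": it is anomalous, yet remains close to the normal region where decision-boundary supervision matters most.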
- Asia > China > Shandong Province > Qingdao (0.04)
- Asia > China > Heilongjiang Province > Harbin (0.04)
- Transportation > Ground > Rail (0.46)
- Information Technology (0.46)
- Asia > Singapore > Central Region > Singapore (0.04)
- Research Report > Experimental Study (0.93)
- Research Report > New Finding (0.93)
Synergistic Dual Spatial-aware Generation of Image-to-Text and Text-to-Image
Yu Zhao
In the visual spatial understanding (VSU) area, spatial image-to-text (SI2T) and spatial text-to-image (ST2I) are two fundamental tasks that appear in dual form. Existing methods for standalone SI2T or ST2I perform imperfectly in spatial understanding, due to the difficulty of 3D-wise spatial feature modeling.
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- Europe > Italy > Tuscany > Florence (0.04)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- (22 more...)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.05)
- North America > Canada > British Columbia > Vancouver (0.04)
- Asia > South Korea > Daegu > Daegu (0.04)
- (16 more...)
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > Texas (0.04)
- Asia > China > Shandong Province > Qingdao (0.04)
- Asia > China > Beijing > Beijing (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Asia > Middle East > Jordan (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.93)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.67)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)